Stochastic Subgradient Method Converges on Tame Functions

Authors

Damek Davis, Dmitriy Drusvyatskiy, Sham Kakade, Jason D. Lee

Abstract


Related articles

Stochastic subgradient method converges at the rate O(k^{-1/4}) on weakly convex functions

We prove that the projected stochastic subgradient method, applied to a weakly convex problem, drives the gradient of the Moreau envelope to zero at the rate O(k^{-1/4}).
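
For context, the Moreau envelope referenced here has the following standard definition, stated from general knowledge rather than quoted from the paper:

    % Moreau envelope of f with smoothing parameter \lambda > 0
    \varphi_\lambda(x) \;:=\; \min_{y \in \mathbb{R}^d}
        \Big\{ f(y) + \tfrac{1}{2\lambda}\, \lVert y - x \rVert^2 \Big\}
    % For weakly convex f, \lVert \nabla \varphi_\lambda(x) \rVert is a natural
    % stationarity measure, and the quoted rate asserts
    % \min_{k \le K} \lVert \nabla \varphi_\lambda(x_k) \rVert = O(K^{-1/4}).

A small gradient of the envelope at x certifies that x is close to a nearly stationary point of f, which is why the envelope's gradient serves as the progress measure for nonsmooth problems of this kind.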


Stochastic Subgradient Methods

Stochastic subgradient methods play an important role in machine learning. In this project we introduce the concepts of subgradient methods and stochastic subgradient methods, discuss their convergence conditions, and weigh their strengths and weaknesses against competing methods. We demonstrate the application of (stochastic) subgradient methods to machine learning with a running example of tr...
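
As a concrete illustration of the kind of running example such a project might use, here is a minimal stochastic subgradient step for an L2-regularized hinge loss (an SVM objective). This is a sketch; all names and parameter values are illustrative, not taken from the cited work.

    import numpy as np

    def svm_subgradient_step(w, x_i, y_i, lam, step):
        # One stochastic subgradient step for
        # f_i(w) = max(0, 1 - y_i * <w, x_i>) + (lam / 2) * ||w||^2.
        margin = y_i * np.dot(w, x_i)
        # Subgradient of the hinge term: -y_i * x_i when margin < 1, else 0
        # (at margin == 1 any element of the subdifferential works; we pick 0).
        g = lam * w - (y_i * x_i if margin < 1 else 0.0)
        return w - step * g

    # Toy usage with the classical O(1/sqrt(k)) diminishing step size.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = rng.choice([-1.0, 1.0], size=100)
    w = np.zeros(5)
    for k in range(1000):
        i = rng.integers(100)
        w = svm_subgradient_step(w, X[i], y[i], lam=0.1, step=0.5 / np.sqrt(k + 1))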


Stochastic Subgradient MCMC Methods

Many Bayesian models involve continuous but non-differentiable log-posteriors, including sparse Bayesian methods with a Laplace prior and regularized Bayesian methods with max-margin posterior regularization that acts like a likelihood term. In analogy to the popular stochastic subgradient methods for deterministic optimization, we present stochastic subgradient MCMC for efficient po...
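
The abstract is cut off before the construction, but the flavor can be sketched: a Langevin-type sampler that substitutes a subgradient of the non-differentiable log-posterior (here, a Laplace prior) for the gradient, in the style of stochastic gradient Langevin dynamics. This is a hedged sketch, not the paper's algorithm; all names are illustrative.

    import numpy as np

    def subgradient_langevin_step(theta, grad_loglik_minibatch, lam, step, rng):
        # Langevin-type update driven by a subgradient of the log-posterior.
        # Laplace prior: log p(theta) = -lam * ||theta||_1 + const, with
        # subgradient -lam * sign(theta) (sign(0) = 0 is a valid choice).
        g = grad_loglik_minibatch(theta) - lam * np.sign(theta)
        noise = rng.normal(scale=np.sqrt(step), size=theta.shape)
        return theta + 0.5 * step * g + noise

    # Toy usage: Gaussian likelihood around the data, Laplace prior on theta.
    rng = np.random.default_rng(0)
    data = rng.normal(loc=1.0, size=(500, 3))

    def grad_loglik(theta, n_batch=50):
        batch = data[rng.integers(len(data), size=n_batch)]
        # Minibatch gradient, rescaled to estimate the full-data gradient.
        return (len(data) / n_batch) * np.sum(batch - theta, axis=0)

    theta = np.zeros(3)
    for _ in range(2000):
        theta = subgradient_langevin_step(theta, grad_loglik, lam=1.0, step=1e-3, rng=rng)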


Accelerate Stochastic Subgradient Method by Leveraging Local Error Bound

In this paper, we propose two accelerated stochastic subgradient methods for stochastic non-strongly convex optimization problems by leveraging a generic local error bound condition. The novelty of the proposed methods lies in leveraging the recent historical solution to tackle the variance in the stochastic subgradient. The key idea of both methods is to iteratively solve the original ...
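
The abstract is truncated, so the exact scheme is not visible here; the generic pattern it gestures at, restarting a stochastic subgradient run from the previous stage's solution with a geometrically shrinking step size, looks roughly like the following sketch (all parameters illustrative).

    import numpy as np

    def restarted_subgradient(stoch_subgrad, x0, stages=5, iters=1000, step0=1.0, seed=0):
        # Each stage warm-starts at the previous stage's averaged iterate and
        # halves the step size; under a local error bound, restart schemes of
        # this kind are what yield faster rates (sketch only).
        rng = np.random.default_rng(seed)
        x, step = np.asarray(x0, dtype=float).copy(), step0
        for _ in range(stages):
            avg = np.zeros_like(x)
            for _ in range(iters):
                x = x - step * stoch_subgrad(x, rng)
                avg += x
            x = avg / iters   # restart point: the stage average
            step *= 0.5
        return x

    # Toy usage on f(x) = ||x - 1||_1 with noisy subgradients.
    noisy_subgrad = lambda x, rng: np.sign(x - 1.0) + 0.1 * rng.normal(size=x.shape)
    print(restarted_subgradient(noisy_subgrad, np.zeros(4)))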


Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems

In this paper, we introduce a stochastic projected subgradient method for weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions, a wide class that includes the additive and convex composite classes. At a high level, the method is an inexact proximal-point iteration in which the strongly convex proximal subproblems are quickly solved with a specialized stochast...
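
The scheme the abstract describes, an inexact proximal-point outer loop whose strongly convex subproblems are approximately solved by stochastic subgradient steps, can be sketched as follows. Parameter choices and names are illustrative assumptions, not the paper's exact specification.

    import numpy as np

    def proximally_guided_subgradient(stoch_subgrad, x0, rho=1.0, outer=50, inner=200, seed=0):
        # Outer loop: inexact proximal point. Inner loop: stochastic subgradient
        # on the strongly convex subproblem
        #     min_x  f(x) + (rho / 2) * ||x - center||^2,
        # with the classical O(1/k) step size for strongly convex problems.
        rng = np.random.default_rng(seed)
        center = np.asarray(x0, dtype=float).copy()
        for _ in range(outer):
            x, avg = center.copy(), np.zeros_like(center)
            for k in range(inner):
                g = stoch_subgrad(x, rng) + rho * (x - center)
                x = x - (2.0 / (rho * (k + 2))) * g
                avg += x
            center = avg / inner   # next proximal center: averaged inner iterate
        return center

    # Toy usage on the weakly convex function f(x) = ||x||_1 - 0.25 * ||x||^2,
    # whose subgradient is sign(x) - 0.5 * x, observed with additive noise.
    sg = lambda x, rng: np.sign(x) - 0.5 * x + 0.1 * rng.normal(size=x.shape)
    print(proximally_guided_subgradient(sg, np.ones(3)))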



Journal

Journal title: Foundations of Computational Mathematics

Year: 2019

ISSN: 1615-3375, 1615-3383

DOI: 10.1007/s10208-018-09409-5